

Search results: all records where Creators/Authors contains "D'Mello, Sidney"


  1. Free, publicly-accessible full text available April 25, 2026
  2. This study explores the relation between students’ discourse dynamics and performance during collaborative problem-solving activities using Linguistic Inquiry and Word Count (LIWC). We analyzed linguistic variables from students’ communications to examine social and cognitive behavior. Participants were 279 undergraduate students from two U.S. universities who worked in a controlled lab setting with the physics-related educational game Physics Playground. Findings highlight the relationship between social and cognitive linguistic variables and students’ physics performance outcomes in a virtual collaborative learning context. This study contributes to a deeper understanding of how discourse dynamics relate to learning outcomes in collaborative learning and provides insights for optimizing educational strategies in remote collaborative learning environments. We further discuss the potential of computational linguistic modeling of learner discourse and the role of natural language processing in deriving insights into learning behavior to support collaborative learning.
    Free, publicly-accessible full text available March 3, 2026
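The LIWC-style analysis described in the abstract above can be sketched in miniature: LIWC scores a text by the share of its words that fall into validated psychological categories. The sketch below is illustrative only, with tiny hypothetical word lists standing in for LIWC's proprietary dictionaries.

```python
# Illustrative sketch (not the study's code): LIWC-style category proportions
# for a single utterance. The word lists below are hypothetical examples;
# real LIWC uses large, validated, proprietary dictionaries.
from collections import Counter

CATEGORIES = {
    "social": {"we", "us", "our", "you", "they", "talk", "share"},
    "cognitive": {"think", "know", "because", "should", "maybe", "if"},
}

def category_proportions(utterance: str) -> dict:
    """Return the fraction of tokens that fall into each category."""
    tokens = utterance.lower().split()
    if not tokens:
        return {name: 0.0 for name in CATEGORIES}
    counts = Counter()
    for tok in tokens:
        for name, words in CATEGORIES.items():
            if tok.strip(".,!?") in words:
                counts[name] += 1
    return {name: counts[name] / len(tokens) for name in CATEGORIES}

props = category_proportions("I think we should try the ramp because it worked")
```

In a study like the one above, such per-utterance proportions would be aggregated per student or group and related to performance measures.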
  3. Free, publicly-accessible full text available March 3, 2026
  4. Free, publicly-accessible full text available November 4, 2025
  5. This work investigates relationships between consistent attendance (attendance rates in groups that keep the same tutor and students across the school year) and learning in small-group tutoring sessions. We analyzed data from two large urban districts comprising 206 ninth-grade student groups (3–6 students per group), for a total of 803 students and 75 tutors. The students attended small-group tutorials approximately every other day during the school year and completed pre- and post-assessments of math skills at the start and end of the year, respectively. First, we found that a group’s attendance rate predicted individual assessment scores better than the individual attendance rates of the students comprising that group. Second, we found that groups with high consistent attendance had more frequent and diverse tutor and student talk centered on rich mathematical discussions. While changing tutors or groups may sometimes be necessary, our findings suggest that consistently attending tutorial sessions as a group with the same tutor might lead the group to implicitly learn as a team despite not being one.
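The abstract's first finding is an analytic contrast: group-level attendance predicts individual scores better than individual attendance does. The sketch below illustrates that kind of comparison on synthetic data (not the study's dataset), using squared Pearson correlation as a simple stand-in for model fit.

```python
# Illustrative sketch (synthetic data, hypothetical effect sizes): comparing
# how well a group-level vs. an individual-level predictor explains a score.
import random

def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

random.seed(0)
# Hypothetical data-generating story: scores track the group's shared
# attendance rate, while individual rates add extra noise around it.
group_rate = [random.uniform(0.4, 1.0) for _ in range(200)]
indiv_rate = [g + random.gauss(0, 0.15) for g in group_rate]
score = [g + random.gauss(0, 0.05) for g in group_rate]

r2_group = r_squared(group_rate, score)
r2_indiv = r_squared(indiv_rate, score)
```

Under these assumed dynamics, the group-level predictor yields the higher fit, mirroring the qualitative pattern the abstract reports.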
  6. Paaßen, Benjamin; Demmans Epp, Carrie (Eds.)
    One of the areas where Large Language Models (LLMs) show promise is automated qualitative coding, typically framed as a text classification task in natural language processing (NLP). Their demonstrated ability to leverage in-context learning and perform well even in data-scarce settings raises the question of whether collecting and annotating large-scale data to train qualitative coding models is still worthwhile. In this paper, we empirically investigate the performance of LLMs designed for prompting-based in-context learning and compare them to models trained under the traditional pretraining–finetuning paradigm with task-specific annotated data, specifically on tasks involving qualitative coding of classroom dialog. Compared to the domains where NLP studies are typically situated, classroom dialog is far more natural and therefore messier. Moreover, tasks in this domain are nuanced and theoretically grounded and require a deep understanding of the conversational context. We provide a comprehensive evaluation across five datasets, including tasks such as talk-move prediction and collaborative problem-solving skill identification. Our findings show that task-specific finetuning strongly outperforms in-context learning, demonstrating the continuing need for high-quality annotated training datasets.
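The in-context learning setup the abstract above compares against finetuning can be sketched as prompt assembly: a handful of labeled examples precede the utterance to be coded, and the model is asked to emit a label. The codes and examples below are hypothetical, not taken from the paper's datasets, and no model call is shown.

```python
# Illustrative sketch (hypothetical labels and examples): building a few-shot
# prompt for in-context qualitative coding of classroom dialog. The finetuning
# alternative the paper favors would instead train a classifier on many such
# (utterance, label) pairs.

LABELS = ["press-for-reasoning", "revoicing", "none"]  # hypothetical codes

FEW_SHOT = [
    ("Why do you think the ball slowed down?", "press-for-reasoning"),
    ("So you're saying friction did it?", "revoicing"),
]

def build_prompt(utterance: str) -> str:
    """Assemble instruction + labeled examples + the target utterance."""
    lines = ["Label each classroom utterance with one of: " + ", ".join(LABELS), ""]
    for text, label in FEW_SHOT:
        lines.append(f"Utterance: {text}\nLabel: {label}\n")
    lines.append(f"Utterance: {utterance}\nLabel:")
    return "\n".join(lines)

prompt = build_prompt("Can you explain how you got that answer?")
```

The prompt ends at "Label:" so a completion model's next tokens are read off as the predicted code; a finetuned classifier skips this scaffolding entirely.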